29 research outputs found

    Search Heuristics, Case-Based Reasoning and Software Project Effort Prediction

    This paper reports on the use of search techniques to help optimise a case-based reasoning (CBR) system for predicting software project effort. A major problem, common to machine learning techniques in general, has been dealing with large numbers of case features, some of which can hinder the prediction process. Unfortunately, searching for the optimal feature subset is a combinatorial problem and therefore NP-hard. This paper examines the use of random searching, hill climbing and forward sequential selection (FSS) to tackle this problem. Results from examining a set of real software project data show that even random searching was better than using all available features (average error 35.6% rather than 50.8%). Hill climbing and FSS both produced results substantially better than the random search (15.3% and 13.1%, respectively), but FSS was more computationally efficient. Providing a description of the fitness landscape of a problem along with search results is a step towards the classification of search problems and their assignment to optimum search techniques. This paper attempts to describe the fitness landscape of this problem by combining the results from random searches and hill climbing, as well as using multi-dimensional scaling to aid visualisation. Amongst other findings, the visualisation results suggest that some form of heuristic-based initialisation might prove useful for this problem.
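
    As a rough illustration of the kind of wrapper search the paper describes, the sketch below wires forward sequential selection around a 1-nearest-neighbour analogue standing in for the CBR predictor. The synthetic data, the leave-one-out error measure and the stopping rule are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def cbr_error(X, y, features):
    """Leave-one-out mean absolute percentage error of a 1-NN analogue
    restricted to the given feature subset (stand-in for a CBR predictor)."""
    if not features:
        return float("inf")
    Xs = X[:, features]
    errors = []
    for i in range(len(y)):
        d = np.linalg.norm(Xs - Xs[i], axis=1)
        d[i] = np.inf                       # exclude the case itself
        pred = y[np.argmin(d)]              # effort of the nearest analogue
        errors.append(abs(pred - y[i]) / y[i])
    return np.mean(errors)

def forward_sequential_selection(X, y):
    """Greedily add the feature that most reduces prediction error."""
    remaining, selected = list(range(X.shape[1])), []
    best_err = float("inf")
    while remaining:
        err, f = min((cbr_error(X, y, selected + [f]), f) for f in remaining)
        if err >= best_err:                 # stop when no feature helps
            break
        best_err, selected = err, selected + [f]
        remaining.remove(f)
    return selected, best_err

# Illustrative run on synthetic project data (not the paper's data set).
rng = np.random.default_rng(0)
X = rng.random((30, 8))
y = 100 * X[:, 0] + 50 * X[:, 3] + rng.normal(0, 5, 30) + 200
print(forward_sequential_selection(X, y))
```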

    Using blind analysis for software engineering experiments

    Context: In recent years there has been growing concern about conflicting experimental results in empirical software engineering. This has been paralleled by awareness of how bias can impact research results. Objective: To explore the practicalities of blind analysis of experimental results to reduce bias. Method: We apply blind analysis to a real software engineering experiment that compares three feature weighting approaches with a naïve benchmark (sample mean) on the Finnish software effort data set. We use this experiment as an example to explore blind analysis as a method to reduce researcher bias. Results: Our experience shows that blinding can be a relatively straightforward procedure. We also highlight various statistical analysis decisions which ought not to be guided by the hunt for statistical significance, and show that results can be inverted merely through a seemingly inconsequential statistical nicety (i.e., the degree of trimming). Conclusion: Whilst there are minor challenges and some limits to the degree of blinding possible, blind analysis is a very practical and easy-to-implement method that supports more objective analysis of experimental results. Therefore we argue that blind analysis should be the norm for analysing software engineering experiments.
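
    The sketch below shows one plausible way to implement the blinding step and the trimming decision mentioned above; the method names, error values and the 10% trim level are hypothetical and stand in for whatever the real study used.

```python
import numpy as np

def blind_labels(methods, seed=None):
    """Map real method names to neutral codes so the analyst cannot tell
    which results belong to which technique until the analysis is fixed."""
    rng = np.random.default_rng(seed)
    names = list(methods)
    codes = [f"Method {c}" for c in "ABCDEFGH"[:len(names)]]  # assumes <= 8 methods
    order = rng.permutation(len(names))
    return {names[i]: codes[pos] for pos, i in enumerate(order)}

def trimmed_mean(x, proportion):
    """Symmetric trimmed mean; the degree of trimming alone can invert a
    comparison, so it should be fixed before unblinding."""
    x = np.sort(np.asarray(x))
    k = int(len(x) * proportion)
    return x[k:len(x) - k].mean() if 2 * k < len(x) else x.mean()

# Hypothetical absolute residuals for three weighting schemes and a benchmark.
rng = np.random.default_rng(1)
results = {name: rng.lognormal(3, 1, 40)
           for name in ["FW-A", "FW-B", "FW-C", "sample mean"]}

mapping = blind_labels(results, seed=42)        # ideally done by a third party
blinded = {mapping[name]: vals for name, vals in results.items()}
for code in sorted(blinded):                    # the analysis sees only the codes
    print(code, round(trimmed_mean(blinded[code], 0.1), 2))
# the mapping is revealed only after all analysis decisions are locked in
```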

    Understanding object feature binding through experimentation as a precursor to modelling

    In order to explore underlying brain mechanisms and to further understand how and where object feature binding occurs, psychophysical data are analysed and will be modelled using an attractor network. This paper describes the psychophysical work and an outline of the proposed model. A rapid serial visual processing paradigm with a post-cue response task was used in three experimental conditions: spatial, temporal and spatio-temporal. Using a ‘staircase’ procedure, the stimulus onset asynchrony for each observer in each condition was set in practice trials to achieve ~50% error rates. Results indicate that spatial location information helps bind object features and temporal location information hinders it. Our expectation is that the proposed neural model will demonstrate a binding mechanism by exhibiting regions of enhanced activity in the location of the target when presented with a partial post-cue. In future work, the model could be lesioned so that neuropsychological phenomena might be exhibited. In such a way, the mechanisms underlying object feature binding might be clarified.
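
    As a schematic of the staircase idea (not the authors' exact procedure), a one-up one-down staircase converges on the stimulus onset asynchrony that yields roughly 50% errors; the simulated observer and step sizes below are assumptions for illustration only.

```python
import numpy as np

def staircase(p_correct, start_soa=120, step=8, trials=60, seed=0):
    """One-up one-down staircase: shorten the SOA after a correct response,
    lengthen it after an error, so it converges near the 50% point."""
    rng = np.random.default_rng(seed)
    soa, history = start_soa, []
    for _ in range(trials):
        correct = rng.random() < p_correct(soa)
        soa = max(step, soa - step if correct else soa + step)
        history.append(soa)
    return np.mean(history[-20:])       # threshold estimate from the final trials

# Hypothetical observer whose accuracy rises smoothly with SOA (in ms).
observer = lambda soa: 1 / (1 + np.exp(-(soa - 100) / 15))
print(round(staircase(observer), 1), "ms")
```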

    Making inferences with small numbers of training sets

    A potential methodological problem with empirical studies that assess project effort prediction systems is discussed. Frequently, a hold-out strategy is deployed so that the data set is split into a training set and a validation set. Inferences are then made concerning the relative accuracy of the different prediction techniques under examination. This is typically done on very small numbers of sampled training sets. It is shown that such studies can lead to almost random results (particularly where relatively small effects are being studied). To illustrate this problem, two data sets are analysed using a configuration problem for case-based prediction, with results generated from 100 training sets. This enables results to be produced with quantified confidence limits. From this it is concluded that in both cases using fewer than five training sets leads to untrustworthy results, and ideally more than 20 sets should be deployed. Unfortunately, this raises a question over a number of empirical validations of prediction techniques, and so it is suggested that further research is needed as a matter of urgency.
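
    A minimal sketch of the underlying point, using assumed synthetic data rather than the paper's: with only a handful of sampled training sets, the estimated benefit of one predictor over another carries wide confidence limits, which narrow as more splits are used.

```python
import numpy as np

def holdout_errors(X, y, predictor, n_splits, train_frac=0.66, seed=0):
    """Mean absolute error of `predictor` over n random train/validation splits."""
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(n_splits):
        idx = rng.permutation(len(y))
        cut = int(train_frac * len(y))
        tr, va = idx[:cut], idx[cut:]
        preds = predictor(X[tr], y[tr], X[va])
        errs.append(np.mean(np.abs(preds - y[va])))
    return np.array(errs)

def mean_predictor(Xtr, ytr, Xva):
    """Naive benchmark: predict the training-set mean for every case."""
    return np.full(len(Xva), ytr.mean())

def nn_predictor(Xtr, ytr, Xva):
    """1-nearest-neighbour analogue of case-based prediction."""
    d = np.linalg.norm(Xva[:, None, :] - Xtr[None, :, :], axis=2)
    return ytr[np.argmin(d, axis=1)]

rng = np.random.default_rng(3)
X = rng.random((60, 5))
y = 300 * X[:, 0] + rng.normal(0, 30, 60) + 500
for n in (3, 20, 100):                      # few vs. many sampled training sets
    diff = (holdout_errors(X, y, mean_predictor, n)
            - holdout_errors(X, y, nn_predictor, n))    # paired on the same splits
    half_width = 1.96 * diff.std(ddof=1) / np.sqrt(n)   # approx. 95% confidence limits
    print(f"{n:3d} splits: mean benefit {diff.mean():6.1f} +/- {half_width:5.1f}")
```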

    Feature weighting techniques for CBR in software effort estimation studies: A review and empirical evaluation

    Context: Software effort estimation is one of the most important activities in the software development process. Unfortunately, estimates are often substantially wrong. Numerous estimation methods have been proposed, including Case-based Reasoning (CBR). In order to improve CBR estimation accuracy, many researchers have proposed feature weighting techniques (FWT). Objective: Our purpose is to systematically review the empirical evidence to determine whether FWT leads to improved predictions. In addition we evaluate these techniques from the perspectives of (i) approach, (ii) strengths and weaknesses, (iii) performance and (iv) experimental evaluation approach, including the data sets used. Method: We conducted a systematic literature review of published, refereed primary studies on FWT (2000-2014). Results: We identified 19 relevant primary studies. These reported a range of different techniques. 17 out of 19 studies make benchmark comparisons with standard CBR, and 16 out of 17 report improved accuracy. Using a one-sample sign test, this positive impact is significant (p = 0.0003). Conclusion: The actionable conclusion from this study is that our review of all relevant empirical evidence supports the use of FWTs, and we recommend that researchers and practitioners give serious consideration to their adoption.
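
    The reported p-value follows from standard sign-test arithmetic; the snippet below reproduces the calculation for 16 positive results out of 17 comparisons (treating the test as two-sided is an assumption about how it was applied).

```python
from math import comb

def sign_test(successes, n, two_sided=True):
    """One-sample sign test against a 50/50 split: the probability of a result
    at least this extreme if feature weighting made no difference."""
    tail = sum(comb(n, k) for k in range(successes, n + 1)) / 2 ** n
    return min(1.0, 2 * tail) if two_sided else tail

# 16 of the 17 benchmarked studies reported improved accuracy over plain CBR.
print(round(sign_test(16, 17), 4))   # ~0.0003, matching the reported value
```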

    Automated migration of build scripts using dynamic analysis and search-based refactoring

    The efficiency of a build system is an important factor for developer productivity. As a result, developer teams have been increasingly adopting new build systems that allow higher build parallelization. However, migrating existing legacy build scripts to new build systems is a tedious and error-prone process. Unfortunately, there is insufficient support for automated migration of build scripts, making the migration more problematic. We propose the first dynamic approach for automated migration of build scripts to new build systems. Our approach works in two phases. First, from a set of execution traces, we synthesize build scripts that accurately capture the intent of the original build. The synthesized build scripts are typically long and hard to maintain. Second, we apply refactorings that raise the abstraction level of the synthesized scripts (e.g., introduce functions for similar fragments). As different refactoring sequences may lead to different build scripts, we use a search-based approach that explores various sequences to identify the best (e.g., shortest) build script. We optimize search-based refactoring with partial-order reduction to explore refactoring sequences faster. We implemented the proposed two-phase migration approach in a tool called METAMORPHOSIS that has been recently used at Microsoft.
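
    The following sketch is a much-simplified, hypothetical analogue of the second phase: a greedy search over "introduce function" refactorings applied to a flat command trace, keeping whichever candidate yields the shortest script. It is not METAMORPHOSIS itself; the commands and the function-like pseudo-syntax are invented for illustration.

```python
from itertools import count

def repeated_blocks(lines, size):
    """Contiguous blocks of the given size that occur more than once."""
    seen, repeats = set(), set()
    for i in range(len(lines) - size + 1):
        block = tuple(lines[i:i + size])
        if block in seen:
            repeats.add(block)
        seen.add(block)
    return repeats

def extract(lines, block, name):
    """Replace every occurrence of `block` with a call to a new helper."""
    defn = [f"def {name}():"] + [f"  {cmd}" for cmd in block]
    out, i = [], 0
    while i < len(lines):
        if tuple(lines[i:i + len(block)]) == block:
            out.append(f"{name}()")
            i += len(block)
        else:
            out.append(lines[i])
            i += 1
    return defn + out

def refactor(script):
    """Greedy search over 'introduce function' refactorings, keeping the
    candidate that yields the shortest script at each step."""
    names = (f"step_{i}" for i in count(1))
    while True:
        candidates = [extract(script, b, next(names))
                      for size in (4, 3, 2)
                      for b in repeated_blocks(script, size)]
        best = min(candidates, key=len, default=script)
        if len(best) >= len(script):
            return script
        script = best

# A tiny synthesized trace: three targets sharing the same set-up commands.
trace = []
for app in ("app1", "app2", "app3"):
    trace += ["mkdir -p out",
              "cc -c util.c -o out/util.o",
              "cc -c a.c -o out/a.o",
              f"ld out/util.o out/a.o -o out/{app}"]
print("\n".join(refactor(trace)))
```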

    Search based software engineering: Trends, techniques and applications

    In the past five years there has been a dramatic increase in work on Search-Based Software Engineering (SBSE), an approach to Software Engineering (SE) in which Search-Based Optimization (SBO) algorithms are used to address problems in SE. SBSE has been applied to problems throughout the SE lifecycle, from requirements and project planning to maintenance and reengineering. The approach is attractive because it offers a suite of adaptive automated and semi-automated solutions in situations typified by large complex problem spaces with multiple competing and conflicting objectives. This article provides a review and classification of literature on SBSE. The work identifies research trends and relationships between the techniques applied and the applications to which they have been applied, and highlights gaps in the literature and avenues for further research.

    Realistic assessment of software effort estimation models

    Context: It is unclear that current approaches to evaluating or comparing competing software cost or effort models give a realistic picture of how they would perform in actual use. Specifically, we’re concerned that the usual practice of using all data with some holdout strategy is at variance with the reality of a data set growing as projects complete. Objective: This study investigates the impact of using unrealistic, though possibly convenient for researchers, ways to compare models on commercial data sets. Our questions are: does this lead to different conclusions in terms of the comparisons and, if so, are the results biased, e.g., more optimistic than those that might realistically be achieved in practice? Method: We compare a traditional approach based on leave-one-out cross-validation (LOOCV) with growing the data set chronologically, using the Finnish and Desharnais data sets. Results: Our realistic, time-based approach to validation is significantly more conservative than LOOCV for both data sets. Conclusion: If we want our research to lead to actionable findings, it’s incumbent upon researchers to evaluate their models in realistic ways. This means a departure from LOOCV techniques, while further investigation is needed for other validation techniques, such as k-fold validation.
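
    A small sketch of the contrast being drawn, using synthetic, time-ordered project data and a nearest-analogue predictor as stand-ins for the paper's models: leave-one-out lets every later project inform the prediction of an earlier one, whereas the growing-window scheme only ever trains on projects that had already completed.

```python
import numpy as np

def loocv_errors(X, y, predict):
    """Leave-one-out: every other project, past or future, trains the model."""
    errs = []
    for i in range(len(y)):
        tr = np.delete(np.arange(len(y)), i)
        errs.append(abs(predict(X[tr], y[tr], X[i]) - y[i]))
    return np.array(errs)

def chronological_errors(X, y, predict, min_history=10):
    """Growing-window validation: each project is predicted using only the
    projects that had already completed before it."""
    errs = []
    for i in range(min_history, len(y)):
        errs.append(abs(predict(X[:i], y[:i], X[i]) - y[i]))
    return np.array(errs)

def nn_predict(Xtr, ytr, x):
    """Nearest-analogue prediction (stand-in for an effort model)."""
    return ytr[np.argmin(np.linalg.norm(Xtr - x, axis=1))]

# Synthetic projects ordered by completion date, with effort drifting over time.
rng = np.random.default_rng(7)
X = rng.random((50, 4))
y = 400 * X[:, 0] + 5 * np.arange(50) + rng.normal(0, 40, 50) + 800
print("LOOCV MAE        :", round(loocv_errors(X, y, nn_predict).mean(), 1))
print("chronological MAE:", round(chronological_errors(X, y, nn_predict).mean(), 1))
```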

    Implicit learning of nonlocal musical rules: Implicitly learning more than chunks

    Dominant theories of implicit learning assume that implicit learning merely involves the learning of chunks of adjacent elements in a sequence. In the experiments presented here, participants implicitly learned a nonlocal rule, thus suggesting that implicit learning can go beyond the learning of chunks. Participants were exposed to a set of musical tunes that were all generated using a diatonic inversion. In the subsequent test phase, participants either classified test tunes as obeying a rule (direct test) or rated their liking for the tunes (indirect test). Both the direct and indirect tests were sensitive to knowledge of chunks. However, only the indirect test was sensitive to knowledge of the inversion rule. Furthermore, the indirect test was overall significantly more sensitive than the direct test, suggesting that knowledge of the inversion rule was below an objective threshold of awareness.
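
    For readers unfamiliar with the transformation, the toy snippet below shows what a diatonic inversion does to a short tune by reflecting scale degrees about a pivot degree. It is only a schematic of the rule, not the procedure used to generate the experimental stimuli.

```python
# Scale degrees of the C-major scale within one octave (0 = C, 1 = D, ...).
C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]

def diatonic_inversion(degrees, pivot=0):
    """Reflect each scale degree about the pivot degree, staying within the scale."""
    return [(2 * pivot - d) % len(C_MAJOR) for d in degrees]

tune = [0, 2, 4, 3, 1, 0]                         # C E G F D C
inverted = diatonic_inversion(tune)
print([C_MAJOR[d] for d in tune], "->", [C_MAJOR[d] for d in inverted])
```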